

Can AI machines develop a moral sense?

FOX News

The Wall Street Journal's Gerry Baker weighs in on growing fears over the capabilities of artificial intelligence technology on "Your World." FOX Business host Gerry Baker – who wrote an op-ed in Monday's Wall Street Journal, "Is There Anything ChatGPT's AI 'Kant' Do?" – outlined the implications of the increased prevalence of artificial intelligence technology in modern society, and the questions and fears AI sparks, Tuesday on "Your World." "[The fear is] that we are creating these machines that in the end will come and control us, and tell us what we're going to do. What I was interested in looking at was not so much what machines can tell us about factual information, but whether or not it's possible these machines might develop any sort of a moral sense, might be able to tell us what's right or wrong. You can ask it all kinds of moral questions like, 'Is it ever right to kill someone?' or 'Is it ever right to tell a lie?' and things like that. And it gives you kind of a mix of answers."


A Safe Ethical System for Intelligent Machines

Waser, Mark R. (Books International)

AAAI Conferences

As machines become more intelligent and take on more responsibilities, their decision-making capabilities must, for everyone's safety and well-being, be informed and constrained by a coherent, integrated moral/ethical structure with no internal inconsistencies. Unfortunately, no such structure is currently agreed to exist. We propose to solve this problem by a) drawing upon experimental evidence and lessons learned from evolution and economics to show that morality is actually objective and derivable from first principles; b) presenting a coherent, integrated, platonic ethical system with no internal inconsistencies that flows naturally from a single high-level, logically derived Kantian imperative down to low-level reflexive "rules of thumb" that match current human sensibilities; and c) suggesting a biologically inspired architecture, relatively easy to implement, that supports and enforces this system.